Remote sensing contributing to assess earthquake risk: from a literature review towards a roadmap
Remote sensing data and methods are widely deployed in order to contribute to
the assessment of numerous components of earthquake risk. While for earthquake hazard-related
investigations the use of remotely sensed data is an established methodological
element with a long research tradition, earthquake vulnerability-centred assessments
incorporating remote sensing data have emerged primarily in recent years. This goes along
with a changing perspective of the scientific community which considers the assessment of
vulnerability and its constituent elements as a pivotal part of a comprehensive risk analysis.
Thereby, the availability of new sensor systems enables an appreciable contribution of remote
sensing for the first time. In this manner, a survey of the interdisciplinary conceptual literature dealing
with the scientific perception of risk, hazard and vulnerability reveals the demand for a
comprehensive description of earthquake hazards as well as an assessment of the present
and future conditions of the elements exposed. A review of earthquake-related remote
sensing literature, realized both in a qualitative and quantitative manner, shows the already
existing and published manifold capabilities of remote sensing contributing to assess
earthquake risk. These include earthquake hazard-related analysis such as detection and
measurement of lineaments and surface deformations in pre- and post-event applications.
Furthermore, pre-event seismic vulnerability-centred assessment of the built and natural
environment and damage assessments for post-event applications are presented. Based on
the review and the discussion of scientific trends and current research projects, first steps
towards a roadmap for remote sensing are drawn, explicitly taking scientific, technical,
multi- and transdisciplinary as well as political perspectives into account, which is
intended to open up possible future research activities.
TanDEM-X for Large-Area Modeling of Urban Vegetation Height: Evidence from Berlin, Germany
Large-area urban ecology studies often miss information
on vertical parameters of vegetation, even though they
represent important constituting properties of complex urban
ecosystems. The new globally available digital elevation model
(DEM) of the spaceborne TanDEM-X mission has an unprecedented
spatial resolution (12 × 12 m) that allows us to derive
such relevant information. So far, suitable approaches using a
TanDEM-X DEM for the derivation of a normalized canopy model
(nCM) are largely absent. Therefore, this paper aims to obtain digital
terrain models (DTMs) for the subsequent computation of two
nCMs for urban-like vegetation (e.g., street trees) and forest-like
vegetation (e.g., parks), respectively, in Berlin, Germany, using a
TanDEM-X DEM and a vegetation mask derived from UltraCamX
data. Initial comparisons between morphological DTM filters
confirm the superior performance of a novel disaggregated progressive
morphological filter (DPMF). For improved assessment
of a DTM for urban-like vegetation, a modified DPMF and image
enhancement methods were applied. For forest-like vegetation, an
interpolation and a weighted DPMF approach were compared.
Finally, all DTMs were used for nCM calculation. The nCM for
urban-like vegetation revealed a mean height of 4.17 m compared
to 9.61 m of a validation nCM. For forest-like vegetation, the mean
height for the nCM of the weighted filtering approach (9.16 m)
produced the best results (validation nCM: 13.55 m). It is concluded
that an nCM from TanDEM-X can capture vegetation
heights in their appropriate dimension, which can be beneficial for
automated height-related vegetation analysis such as comparisons
of vegetation carbon storage between several cities.
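The core computation described above, subtracting a digital terrain model from a digital surface model within a vegetation mask to obtain a normalized canopy model (nCM), can be sketched as follows. This is a didactic toy example with hypothetical elevation values, not the paper's TanDEM-X processing chain:

```python
import numpy as np

def normalized_canopy_model(dsm, dtm, vegetation_mask):
    """nCM = DSM - DTM, restricted to vegetated cells (others become NaN).
    Negative heights, which arise from DTM filtering artifacts, are clipped."""
    ncm = np.where(vegetation_mask, dsm - dtm, np.nan)
    return np.clip(ncm, 0.0, None)

# Toy 3x3 grids (hypothetical values, not TanDEM-X data)
dsm = np.array([[110.0, 112.0, 108.0],
                [109.0, 115.0, 107.5],
                [108.0, 109.0, 107.0]])
dtm = np.full_like(dsm, 107.0)          # flat terrain at 107 m
veg = np.array([[True,  True,  False],
                [True,  True,  False],
                [False, False, False]])
ncm = normalized_canopy_model(dsm, dtm, veg)
print(np.nanmean(ncm))                  # mean canopy height over vegetated cells -> 4.5
```

In practice the DTM itself is the hard part, which is why the paper compares morphological DTM filters; the subtraction step above is the final, simple stage.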
What's exposed? Mapping elements at risk from space
The world has suffered from severe natural disasters over the last decade. The earthquake in Haiti in 2010 or the typhoon "Haiyan" hitting the Philippines in 2013 are among the most prominent examples in recent years. Especially in developing countries, knowledge of the amount, location, or type of exposed elements or people is often not available. (Geo-)data are mostly inaccurate, generalized, not up to date, or not available at all. Thus, fast and effective disaster management is often delayed until the necessary geo-data allow an assessment of affected people, buildings, infrastructure, and their respective locations.
In the last decade, Earth observation data and methods have developed into a product portfolio ranging from low-resolution land cover datasets to high-resolution, spatially accurate building inventories that classify elements at risk or even indirectly assess population densities. This presentation will give an overview of the currently available products and EO-based capabilities from global to local scale.
On the global to regional scale, remote sensing derived geo-products help to approximate the inventory of elements at risk in their spatial extent and abundance by mapping and modelling approaches of land cover or related spatial attributes such as night-time illumination or fractions of impervious surfaces. The capabilities and limitations for mapping physical exposure will be discussed in detail using the example of DLR's "Global Urban Footprint" initiative.
On the local scale, the potential of remote sensing particularly lies in the generation of spatially and thematically accurate building inventories for the detailed analysis of the building stock's physical exposure. Even vulnerability-related indicators can be derived. Indicators such as building footprint, height, shape characteristics, roof materials, location, construction age, and structure type have already been combined with civil engineering approaches to assess building stability for large areas. Especially latest-generation optical sensors featuring very high geometric resolutions, often in combination with digital surface models, are perceived as advantageous for operational applications, especially for small to medium-sized urban areas.
With regard to user-oriented product generation in the FP7 project SENSUM, a multi-scale and multi-source reference database has been set up to systematically screen available products, from global to local ones, with regard to data availability in data-rich and data-poor countries. Thus, the higher-ranking goal of this presentation is to provide a systematic overview of EO-based data sets and their individual capabilities and limitations with respect to spatial, temporal, and thematic detail to support decision-making before, during, and after natural disasters.
Remote monitoring to predict bridge scour failure using Interferometric Synthetic Aperture Radar (InSAR) stacking techniques
Scour is the removal of ground material in water bodies due to environmental changes in water flow. It particularly occurs at bridge piers, and the holes formed can make bridges susceptible to collapse. The most common cause of bridge collapse is scour occurring during flooding, with some failures causing loss of life and most resulting in significant transport disruption and economic loss. Consequently, failure of bridges due to scour is of great concern to bridge asset owners and is currently very difficult to predict, since conventional assessment methods require very resource-demanding monitoring efforts in situ. This paper presents evidence of how InSAR techniques can be used to monitor bridges at risk of scour, using Tadcaster Bridge, England, as a case study. Tadcaster Bridge suffered a partial collapse due to river scour on the evening of December 29th, 2015, following a period of severe rainfall and flooding. 48 TerraSAR-X scenes over the bridge from the two-year period prior to the collapse are analysed using the small baseline subset (SBAS) interferometric synthetic aperture radar (InSAR) approach. The study highlights a distinct movement in the region of the bridge where the collapse occurred prior to the actual event. This precursor to failure, observed in the data over a month before the actual collapse, suggests the possible use of InSAR as a means of an early warning system in structural health monitoring of bridges at risk of scour. This work was made possible by EPSRC (UK) Award 1636878, with iCase sponsorship from the National Physical Laboratory and additional funding from Laing O'Rourke.
Object-based Morphological Profiles for Classification of Remote Sensing Imagery
Morphological operators (MOs) and their enhancements
such as morphological profiles (MPs) have attracted lively
scientific attention since they have proven beneficial for,
for example, the classification of very high spatial resolution panchromatic,
multi-, and hyperspectral imagery. They account for spatial
structures with differing magnitudes and, thus, provide a comprehensive
multilevel description of an image. In this paper, we
introduce the concept of object-based MPs (OMPs) to also encode
shape-related, topological, and hierarchical properties of image
objects in an exhaustive way. Thereby, we seek to benefit from the
so-called object-based image analysis framework by partitioning
the original image into objects with a segmentation algorithm on
multiple scales. The obtained spatial entities (i.e., objects) are used
to aggregate multiple sequences obtained with MOs according to
statistical measures of central tendency. This strategy is followed
to simultaneously preserve and characterize shape properties of
objects and enable both the topological and hierarchical decompositions
of an image with respect to the progressive application of
MOs. Subsequently, supervised classification models are learned
by considering this additionally encoded information. Experimental
results are obtained with a random forest classifier with
heuristically tuned hyperparameters and a wrapper-based feature
selection scheme. We evaluated the results for two test sites of
panchromatic WorldView-II imagery, which was acquired over an
urban environment. In this setting, the proposed OMPs allow for
significant improvements with respect to classification accuracy
compared to standard MPs (i.e., obtained by paired sequences of
erosion, dilation, opening, closing, opening by top-hat, and closing
by top-hat operations).
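The two building blocks of the abstract above, a morphological profile and its object-based aggregation, can be illustrated with a minimal numpy sketch. The flat min/max window filters, the segment layout, and the use of the mean as the measure of central tendency are simplifying assumptions for illustration; the paper's actual OMP construction and segmentation are richer:

```python
import numpy as np

def _window_filter(img, size, func):
    """Naive sliding-window min/max filter (flat structuring element) with
    edge padding; didactic only, not efficient."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty(img.shape)
    for r in range(img.shape[0]):
        for col in range(img.shape[1]):
            out[r, col] = func(padded[r:r + size, col:col + size])
    return out

def opening(img, size):
    # erosion (min) followed by dilation (max)
    return _window_filter(_window_filter(img, size, np.min), size, np.max)

def closing(img, size):
    # dilation (max) followed by erosion (min)
    return _window_filter(_window_filter(img, size, np.max), size, np.min)

def morphological_profile(img, sizes=(3, 5, 7)):
    """Stack openings and closings with increasing structuring element sizes
    around the original image: a basic MP."""
    return np.stack([opening(img, s) for s in sizes]
                    + [img.astype(float)]
                    + [closing(img, s) for s in sizes])

def object_based_profile(mp, segments):
    """OMP idea in miniature: aggregate every profile layer per segment with
    a measure of central tendency (here the mean) and broadcast it back."""
    omp = np.empty_like(mp)
    for k in range(mp.shape[0]):
        for label in np.unique(segments):
            mask = segments == label
            omp[k][mask] = mp[k][mask].mean()
    return omp

rng = np.random.default_rng(0)
img = rng.integers(0, 255, (8, 8)).astype(float)
segments = np.zeros((8, 8), dtype=int)
segments[:, 4:] = 1                      # two crude "objects"
mp = morphological_profile(img)
omp = object_based_profile(mp, segments)
print(mp.shape, omp.shape)               # 7 layers: 3 openings, original, 3 closings
```

Note the characteristic MP ordering property the sketch preserves: openings never exceed the original image, and closings never fall below it.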
LSTM models for spatiotemporal extrapolation of population data
The anticipation of future geospatial population distributions is crucial for numerous application domains. Here, we capitalize upon existing gridded population time series data sets, which are provided on an open-source basis globally, and implement a machine learning model tailored for time series analysis, i.e., a Long Short-Term Memory (LSTM) network. In detail, we harvest WorldPop population data and learn an LSTM model for anticipating population along a three-year interval. Experimental results are obtained for Peru's capital Lima, which features high population dynamics. To gain insights regarding the competitive performance of LSTM models in this application context, we also implement multilinear regression and Random Forest models for comparison. The results underline the usefulness of temporal models, i.e., LSTM, for forecasting gridded population data.
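The gating mechanism that makes an LSTM suitable for such population time series can be sketched with a single numpy cell. The weights below are random and untrained, the series values are hypothetical, and the scaling and readout are illustrative assumptions; the paper's actual model was trained on WorldPop grids:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W (4n x d), U (4n x n), and b (4n) stack the
    parameters of the input (i), forget (f), output (o), and candidate (g)
    transforms; c is the cell state and h the hidden state."""
    n = h.size
    z = W @ x + U @ h + b
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2*n]), sigmoid(z[2*n:3*n])
    g = np.tanh(z[3*n:])
    c_new = f * c + i * g          # gated memory update
    h_new = o * np.tanh(c_new)     # gated output
    return h_new, c_new

# Run an (untrained, randomly initialized) cell over one grid cell's
# population series; a trained linear readout on h would yield the forecast.
rng = np.random.default_rng(42)
n, d = 8, 1
W = rng.normal(0.0, 0.1, (4 * n, d))
U = rng.normal(0.0, 0.1, (4 * n, n))
b = np.zeros(4 * n)
series = np.array([120.0, 125.0, 131.0, 140.0, 152.0])  # hypothetical counts
h, c = np.zeros(n), np.zeros(n)
for value in series / series.max():   # simple scaling to [0, 1]
    h, c = lstm_step(np.array([value]), h, c, W, U, b)
print(h.shape)
```

The forget gate f decides how much of the accumulated trend in c survives each step, which is precisely what a multilinear regression baseline cannot represent.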
Deep multitask learning with label interdependency distillation for multicriteria street-level image classification
Multitask learning (MTL) aims at beneficial joint solving of multiple prediction problems by sharing information across different tasks. However, without adequate consideration of interdependencies, MTL models are prone to miss valuable information. In this paper, we introduce a novel deep MTL architecture that specifically encodes cross-task interdependencies within the setting of multiple image classification problems. Based on task-wise interim class label probability predictions by an intermediately supervised hard parameter sharing convolutional neural network, interdependencies are inferred in two ways: i) by directly stacking label probability sequences to the image feature vector (i.e., multitask stacking), and ii) by passing probability sequences to gated recurrent unit-based recurrent neural networks to explicitly learn cross-task interdependency representations and stacking those to the image feature vector (i.e., interdependency representation learning). The proposed MTL architecture is applied as a tool for generic multi-criteria building characterization using street-level imagery related to risk assessments toward multiple natural hazards. Experimental results for classifying buildings according to five vulnerability-related target variables (i.e., five learning tasks), namely height, lateral load-resisting system material, seismic building structural type, roof shape, and block position are obtained for the Chilean capital Santiago de Chile. Our MTL methods with cross-task label interdependency modeling consistently outperform single task learning (STL) and classical hard parameter sharing MTL alike. Even when starting already from high classification accuracy levels, estimated generalization capabilities can be further improved by considerable margins of accumulated task-specific residuals beyond +6% κ.
Thereby, the combination of multitask stacking and interdependency representation learning attains the highest accuracy estimates for the addressed task and data setting (up to cross-task accuracy mean values of 88.43% overall accuracy and 84.49% κ). From an efficiency perspective, the proposed MTL methods turn out to be substantially favorable compared to STL in terms of training time consumption.
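The first interdependency mechanism, multitask stacking, amounts to concatenating the interim per-task probability vectors to the shared image features. A minimal sketch with hypothetical dimensions (the 128-d feature vector and the two example tasks are assumptions, not the paper's architecture):

```python
import numpy as np

def multitask_stacking(image_features, task_probabilities):
    """Concatenate interim per-task class probability vectors to the shared
    image feature vector so that the final task-specific heads can exploit
    cross-task label interdependencies."""
    return np.concatenate([image_features, *task_probabilities])

# Hypothetical dimensions: a 128-d image feature vector plus interim
# probabilities for two tasks with 5 and 3 classes, respectively.
features = np.zeros(128)
p_height = np.array([0.1, 0.6, 0.2, 0.05, 0.05])   # e.g., height classes
p_roof   = np.array([0.7, 0.2, 0.1])               # e.g., roof shape classes
stacked = multitask_stacking(features, [p_height, p_roof])
print(stacked.shape)  # -> (136,)
```

The second mechanism replaces this plain concatenation with a recurrent network over the probability sequences, so that a learned representation of the interdependencies, rather than the raw probabilities, is stacked.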
Semi-supervised learning with constrained virtual support vector machines for classification of remote sensing image data
We introduce two semi-supervised models for the classification of remote sensing image data. The models are built upon the framework of Virtual Support Vector Machines (VSVM). Generally, VSVM follow a two-step learning procedure: A Support Vector Machine (SVM) model is learned to determine and extract labeled samples that constitute the decision boundary with the maximum margin between thematic classes, i.e., the Support Vectors (SVs). The SVs govern the creation of so-called virtual samples. This is done by modifying, i.e., perturbing, the image features to which a decision boundary needs to be invariant. Subsequently, the classification model is learned for a second time by using the newly created virtual samples in addition to the SVs to eventually find a new optimal decision boundary. Here, we extend this concept by (i) integrating a constrained set of semi-labeled samples when establishing the final model. Thereby, the model constrainment, i.e., the selection mechanism for including solely informative semi-labeled samples, is built upon a self-learning procedure composed of two active learning heuristics. Additionally, (ii) we consecutively deploy semi-labeled samples for the creation of semi-labeled virtual samples by modifying the image features of semi-labeled samples that have become semi-labeled SVs after an initial model run. We present experimental results from classifying two multispectral data sets with a sub-meter geometric resolution. The proposed semi-supervised VSVM models exhibit the most favorable performance compared to related SVM- and VSVM-based approaches, as well as (semi-)supervised CNNs, in situations with a very limited amount of available prior knowledge, i.e., labeled samples.
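The virtual-sample step of the two-step VSVM procedure described above can be sketched in isolation. The additive offsets standing in for an invariance-preserving perturbation (e.g., a small radiometric shift) and all feature values are illustrative assumptions; the SVM training before and after this step would use any standard SVM implementation:

```python
import numpy as np

def make_virtual_samples(support_vectors, labels, deltas):
    """Step two of the VSVM idea: perturb the support vectors' image features
    by offsets to which the decision boundary should be invariant, and let
    each virtual sample inherit its support vector's label."""
    X_virtual = np.vstack([support_vectors + d for d in deltas])
    y_virtual = np.tile(labels, len(deltas))
    return X_virtual, y_virtual

# Three hypothetical support vectors with four spectral features each
svs    = np.array([[0.30, 0.40, 0.50, 0.60],
                   [0.20, 0.10, 0.70, 0.80],
                   [0.90, 0.50, 0.30, 0.20]])
labels = np.array([0, 1, 1])
X_v, y_v = make_virtual_samples(svs, labels, deltas=[-0.05, 0.05])
print(X_v.shape, y_v.shape)  # -> (6, 4) (6,)
```

Retraining on the SVs plus these virtual samples then yields the new decision boundary; the paper's extension additionally feeds selected semi-labeled samples through the same machinery.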
Quality control and error assessment of the Aeolus L2B wind results from the Joint Aeolus Tropical Atlantic Campaign
Since the start of the European Space Agency's Aeolus mission in 2018, various studies were dedicated to the evaluation of its wind data quality and particularly to the determination of the systematic and random errors in the Rayleigh-clear and Mie-cloudy wind results provided in the Aeolus Level-2B (L2B) product. The quality control (QC) schemes applied in the analyses mostly rely on the estimated error (EE), reported in the L2B data, using different and often subjectively chosen thresholds for rejecting data outliers, thus hampering the comparability of different validation studies. This work gives insight into the calculation of the EE for the two receiver channels and reveals its limitations as a measure of the actual wind error due to its spatial and temporal variability. It is demonstrated that a precise error assessment of the Aeolus winds necessitates a careful statistical analysis, including a rigorous screening for gross errors to be compliant with the error definitions formulated in the Aeolus mission requirements. To this end, the modified Z score and normal quantile plots are shown to be useful statistical tools for effectively eliminating gross errors and for evaluating the normality of the wind error distribution in dependence on the applied QC scheme, respectively. The influence of different QC approaches and thresholds on key statistical parameters is discussed in the context of the Joint Aeolus Tropical Atlantic Campaign (JATAC), which was conducted in Cabo Verde in September 2021. Aeolus winds are compared against model background data from the European Centre for Medium-Range Weather Forecasts (ECMWF) before the assimilation of Aeolus winds and against wind data measured with the 2 µm heterodyne detection Doppler wind lidar (DWL) aboard the Falcon aircraft.
The two studies make evident that the error distribution of the Mie-cloudy winds is strongly skewed, with a preponderance of positively biased wind results distorting the statistics if not filtered out properly. Effective outlier removal is accomplished by applying a two-step QC based on the EE and the modified Z score, thereby ensuring an error distribution with a high degree of normality while retaining a large portion of wind results from the original dataset. After the utilization of the described QC approach, the systematic errors in the L2B Rayleigh-clear and Mie-cloudy winds are determined to be below 0.3 m s⁻¹ with respect to both the ECMWF model background and the 2 µm DWL. Differences in the random errors relative to the two reference datasets (Mie vs. model is 5.3 m s⁻¹, Mie vs. DWL is 4.1 m s⁻¹, Rayleigh vs. model is 7.8 m s⁻¹, and Rayleigh vs. DWL is 8.2 m s⁻¹) are elaborated in the text.
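The two-step QC based on the EE and the modified Z score can be sketched as follows. The thresholds (an EE cutoff of 8.0 m/s and the conventional 3.5 modified-Z cutoff) and the synthetic wind differences are illustrative assumptions, not the study's actual settings:

```python
import numpy as np

def modified_z_score(diffs):
    """Modified Z score built on the median and the median absolute deviation
    (MAD), which are robust against the very outliers to be flagged. The
    factor 0.6745 makes the MAD consistent with the standard deviation for
    normally distributed data. (A MAD of 0 would need special handling.)"""
    med = np.median(diffs)
    mad = np.median(np.abs(diffs - med))
    return 0.6745 * (diffs - med) / mad

def two_step_qc(obs, ref, ee, ee_max=8.0, z_max=3.5):
    """Two-step QC sketch: first reject winds whose estimated error (EE)
    exceeds a threshold, then remove remaining gross errors via the modified
    Z score of the observation-minus-reference differences."""
    keep = ee < ee_max
    z = np.full(obs.shape, np.inf)
    z[keep] = modified_z_score(obs[keep] - ref[keep])
    return keep & (np.abs(z) < z_max)

# Synthetic wind differences (m/s) with one gross error of 25 m/s
obs = np.array([0.1, -0.2, 0.3, 25.0, 0.0, -0.1, 0.2, -0.3])
ref = np.zeros_like(obs)
ee  = np.full_like(obs, 2.0)
mask = two_step_qc(obs, ref, ee)
print(mask)  # only the 25 m/s gross error is rejected
```

Because median and MAD are barely moved by the outlier, the gross error receives an enormous modified Z score while ordinary scatter stays well below the cutoff, which is what makes the statistic preferable to a mean/standard-deviation Z score here.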
Selection of Unlabeled Source Domains for Domain Adaptation in Remote Sensing
In the context of supervised learning techniques, it can be desirable to utilize existing prior knowledge from a source domain to estimate a target variable in a target domain by exploiting the concept of domain adaptation. This is done to alleviate the costly compilation of prior knowledge, i.e., training data. Here, our goal is to select a single source domain for domain adaptation from multiple potentially helpful but unlabeled source domains. The training data is solely obtained for a source domain if it was identified as being relevant for estimating the target variable in the corresponding target domain by a selection mechanism. From a methodological point of view, we propose unsupervised source selection by voting from (an ensemble of) similarity metrics that follow aligned marginal distributions regarding image features of source and target domains. Thereby, we also propose an unsupervised pruning heuristic to solely include robust similarity metrics in an ensemble voting scheme. We provide an evaluation of the methods by learning models from training data sets created with Level-of-Detail-1 building models and regress built-up density and height on Sentinel-2 satellite imagery. To evaluate the domain adaptation capability, we learn and apply models interchangeably for the four largest cities in Germany. Experimental results underline the capability of the methods to obtain more frequently higher accuracy levels with an improvement of up to almost 10 percentage points regarding the most robust selection mechanisms compared to random source-target domain selections.
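The core voting idea, each similarity metric nominating the source domain whose feature distribution is closest to the target, can be sketched with two simple metrics. The metrics, the 1-D synthetic features, and the equal sample sizes are simplifying assumptions; the paper additionally aligns marginal distributions and prunes unreliable metrics before voting:

```python
import numpy as np

def wasserstein_1d(a, b):
    """Empirical 1-D Wasserstein-1 distance for equal-size samples:
    mean absolute difference of the sorted values."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

def median_shift(a, b):
    """Absolute difference of the sample medians."""
    return abs(np.median(a) - np.median(b))

def select_source(target, sources, metrics):
    """Each similarity metric votes for the source domain whose feature
    distribution is closest to the target; the most-voted source wins
    (ties broken by first occurrence)."""
    votes = np.zeros(len(sources), dtype=int)
    for metric in metrics:
        distances = [metric(target, src) for src in sources]
        votes[int(np.argmin(distances))] += 1
    return int(np.argmax(votes)), votes

# Synthetic 1-D feature samples: source A matches the target distribution,
# source B is shifted.
rng = np.random.default_rng(1)
target = rng.normal(0.0, 1.0, 200)
src_a  = rng.normal(0.0, 1.0, 200)
src_b  = rng.normal(3.0, 1.0, 200)
idx, votes = select_source(target, [src_a, src_b], [wasserstein_1d, median_shift])
print(idx, votes)  # source A (index 0) receives both votes
```

Training labels would then be compiled only for the winning source domain, which is the cost saving the abstract refers to.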